• DOMAIN: Semiconductor manufacturing process

• CONTEXT: A complex modern semiconductor manufacturing process is normally under constant surveillance via the monitoring of signals/variables collected from sensors and/or process measurement points. However, not all of these signals are equally valuable in a specific monitoring system: the measured signals contain a combination of useful information, irrelevant information, and noise, and engineers typically have far more signals than are actually required. If we treat each type of signal as a feature, feature selection can be applied to identify the most relevant signals. Process engineers can then use these signals to determine the key factors contributing to yield excursions downstream in the process, enabling increased process throughput, decreased time to learning, and reduced per-unit production costs. These signals can be used as features to predict the yield type, and by analysing different combinations of features, the essential signals impacting the yield type can be identified.

• DATA DESCRIPTION: sensor-data.csv : (1567, 592). The data consists of 1567 examples, each with 591 features plus a target column. Each example represents a single production entity with its associated measured features, and the label is a simple pass/fail yield result for in-house line testing. In the target column, −1 corresponds to a pass and 1 corresponds to a fail, and the timestamp is for that specific test point.

• PROJECT OBJECTIVE: Build a classifier to predict the pass/fail yield of a particular process entity, and analyse whether all of the features are required to build the model.

Steps and tasks:

1. Import and explore the data.
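A minimal sketch of the import-and-explore step. A small synthetic DataFrame stands in for sensor-data.csv here (the real file is not bundled with these notes); swap in `pd.read_csv("sensor-data.csv")` to run it on the actual data.

```python
import numpy as np
import pandas as pd

# Stand-in for sensor-data.csv: same layout idea, i.e. numbered feature
# columns plus a Pass/Fail target taking values -1 (pass) and 1 (fail).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(20, 5)), columns=[str(i) for i in range(5)])
df["Pass/Fail"] = rng.choice([-1, 1], size=20)

print(df.shape)                         # rows x columns
print(df.isnull().sum().sum())          # total missing values
print(df["Pass/Fail"].value_counts())   # class balance check
```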

2. Data cleansing:

Drop all columns whose null count is greater than 200.

This removed 52 columns that had more than two hundred null values, leaving 309 columns and 1567 rows.

Note: the dataset is imbalanced, so it will need to be balanced.
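The null-threshold drop above can be sketched as follows. A tiny synthetic frame is used here with a smaller threshold (the project uses 200 on the real data); the threshold and column names are illustrative.

```python
import numpy as np
import pandas as pd

# Synthetic frame: column "c" is given 8 nulls so it exceeds the threshold.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(10, 4)), columns=list("abcd"))
df.loc[:7, "c"] = np.nan

NULL_THRESHOLD = 5  # 200 in the actual project
keep = df.columns[df.isnull().sum() <= NULL_THRESHOLD]
cleaned = df[keep]

print(cleaned.shape)  # column "c" is dropped
```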

3. Data analysis & visualisation:

NOTE: These four columns are not important for our analysis, so we will drop them: ['5', '42', '49', '69']
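Dropping the four columns named above is a one-liner; the extra columns in this sketch are placeholders, not real feature names from the dataset.

```python
import numpy as np
import pandas as pd

# Synthetic frame containing the four columns flagged as unimportant.
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(5, 6)),
                  columns=['5', '42', '49', '69', '100', '200'])

to_drop = ['5', '42', '49', '69']
df = df.drop(columns=to_drop)
print(df.columns.tolist())  # only the remaining columns survive
```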

NOTE:

  1. Accuracy is high, but F1, recall, and precision are very poor.
  2. The default parameters are not performing well, so hyperparameter tuning is needed.
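The first note is the classic symptom of class imbalance, and a small example makes it concrete: a classifier that always predicts the majority "pass" class (−1) gets high accuracy but zero precision, recall, and F1 on the minority "fail" class. The 95/5 split below is illustrative, not the dataset's exact ratio.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_true = np.array([-1] * 95 + [1] * 5)   # imbalanced: mostly "pass"
y_pred = np.full(100, -1)                # degenerate model: always "pass"

print(accuracy_score(y_true, y_pred))                                  # 0.95
print(recall_score(y_true, y_pred, pos_label=1, zero_division=0))      # 0.0
print(precision_score(y_true, y_pred, pos_label=1, zero_division=0))   # 0.0
print(f1_score(y_true, y_pred, pos_label=1, zero_division=0))          # 0.0
```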

Conclusions:

  1. No model performs well, since there is no clear-cut way to select the important columns.
  2. Tried various approaches to column selection, still without good results.
  3. Performed hyperparameter tuning, still without good results.
  4. Handling the imbalance with upsampling did give good results.
  5. Downsampling was not tried.
  6. Used PCA with 5 components, still no improvement.
  7. Overall, when hyperparameter-tuned, Logistic Regression performs better than all other models.
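The combined recipe from the conclusions, upsampling the minority class, PCA with 5 components, and a hyperparameter-tuned Logistic Regression, can be sketched end to end. Synthetic data stands in for the real features, and the C grid is an assumption, not the tuning grid actually used in the project.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.utils import resample

# Synthetic imbalanced data (90/10) as a stand-in for the sensor features.
X, y = make_classification(n_samples=400, n_features=30,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Upsample the minority class in the training split only, to avoid
# leaking duplicated samples into the evaluation set.
minority = y_tr == 1
X_up, y_up = resample(X_tr[minority], y_tr[minority], replace=True,
                      n_samples=int((~minority).sum()), random_state=0)
X_bal = np.vstack([X_tr[~minority], X_up])
y_bal = np.concatenate([y_tr[~minority], y_up])

# PCA to 5 components, then grid-search Logistic Regression's C on F1.
pipe = Pipeline([("pca", PCA(n_components=5)),
                 ("clf", LogisticRegression(max_iter=1000))])
grid = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1, 10]},
                    scoring="f1", cv=3)
grid.fit(X_bal, y_bal)

print(grid.best_params_)
print(grid.score(X_te, y_te))  # F1 on the held-out split
```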